26 research outputs found

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments such as computer-aided design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed, handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast number of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by performing state representation learning prior to reinforcement learning: the gestures are projected into a latent space learned by a variational autoencoder (VAE).
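
    The role of the VAE in this state representation learning step can be sketched concretely. Below is a minimal, illustrative PyTorch sketch (not the authors' code): gestures are assumed to be fixed-length sequences of 2D points flattened into vectors, and all names and dimensions are hypothetical.

    # Minimal sketch (illustrative, not the authors' implementation):
    # a VAE embedding fixed-length 2D gesture trajectories into a latent
    # space; the "human" agent can then act in this latent space, keeping
    # generated gestures close to the distribution of collected ones.
    import torch
    import torch.nn as nn

    class GestureVAE(nn.Module):
        def __init__(self, n_points=32, latent_dim=8):
            super().__init__()
            d_in = n_points * 2  # (x, y) per trajectory point, flattened
            self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU())
            self.mu = nn.Linear(128, latent_dim)       # posterior mean
            self.log_var = nn.Linear(128, latent_dim)  # posterior log-variance
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d_in))

        def forward(self, x):
            h = self.encoder(x)
            mu, log_var = self.mu(h), self.log_var(h)
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
            return self.decoder(z), mu, log_var

    def vae_loss(x, x_hat, mu, log_var):
        # Reconstruction term plus KL divergence to the standard normal prior.
        recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + kl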

    Producing efficient error-bounded solutions for transition independent decentralized MDPs

    There has been substantial progress on algorithms for single-agent sequential decision-making problems represented as partially observable Markov decision processes (POMDPs). A number of efficient algorithms for solving POMDPs share two desirable properties: error bounds and fast convergence rates. Despite significant efforts, no algorithms for solving decentralized POMDPs benefit from these properties, leading to either poor solution quality or limited scalability. This paper presents the first approach for solving transition-independent decentralized Markov decision processes (MDPs) that inherits these properties. Two related algorithms illustrate this approach. The first recasts the original problem as a finite-horizon, deterministic, and completely observable Markov decision process. In this form, the original problem is solved by combining heuristic search with constraint optimization to quickly converge to a near-optimal policy. This algorithm also provides the foundation for the first algorithm for solving infinite-horizon transition-independent decentralized MDPs. We demonstrate that both methods outperform state-of-the-art algorithms by multiple orders of magnitude, and that, for infinite-horizon decentralized MDPs, the algorithm is able to construct more concise policies by searching cyclic policy graphs.
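
    For context, transition independence means that each agent's local state evolves under its own action alone; the agents are coupled only through the joint reward. In standard notation (a reconstruction from the literature, not quoted from the paper), with joint state s = (s_1, ..., s_n) and joint action a = (a_1, ..., a_n):

    % Transition independence: the joint transition factorizes per agent,
    % while the reward R(s, a) remains a joint, coupling term.
    P(s' \mid s, a) \;=\; \prod_{i=1}^{n} P_i\left(s'_i \mid s_i, a_i\right)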

    Optimally Solving Dec-POMDPs as Continuous-State MDPs

    Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be decentralized. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. To provide scalability, we refine this approach by combining heuristic search and compact representations that exploit the structure present in multi-agent domains, without losing the ability to converge to an optimal solution. In particular, we introduce a feature-based heuristic search value iteration (FB-HSVI) algorithm that relies on feature-based compact representations, point-based updates, and efficient action selection. A theoretical analysis demonstrates that FB-HSVI terminates in finite time with an optimal solution. We include an extensive empirical analysis using well-known benchmarks, thereby demonstrating that our approach provides significant scalability improvements compared to the state of the art.
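
    The central object of this transformation is the occupancy state: the distribution, induced by the decision rules chosen so far, over hidden states and joint action-observation histories. Schematically, in reconstructed notation (not quoted from the paper):

    % Occupancy state at time t, given past joint decision rules d_0..d_{t-1}:
    \eta_t(s, \theta) \;=\; \Pr\left(s_t = s,\; \theta_t = \theta \mid d_0, \ldots, d_{t-1}\right)
    % The next occupancy state is a deterministic function of \eta_t and the
    % decision rule d_t, which is what makes the occupancy MDP deterministic
    % and lets POMDP-style machinery apply.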

    Exact Resolution of Dec-POMDPs as Continuous-State MDPs

    Optimally solving decentralized partially observable Markov decision processes (Dec-POMDPs) is a hard combinatorial problem. Current algorithms search, for each agent, through the complete space of policies over histories. Because of the doubly exponential growth of this space as the planning horizon increases, these methods quickly become intractable. However, in real-world problems, computing policies over the complete history space is often unnecessary. Extracting the relevant information from a history makes it possible to reduce the number of useful histories. We show that by transforming a Dec-POMDP into a continuous-state MDP, we are able to find and exploit these low-dimensional representations. Using this new transformation, we can apply efficient techniques for solving POMDPs and continuous-state MDPs. By combining a generic search algorithm with dimensionality reduction based on feature selection, we introduce a new approach for optimally solving problems with planning horizons significantly longer than previous methods allowed.

    Optimally Solving Dec-POMDPs as Continuous-State MDPs: Theory and Algorithms

    Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in cooperative decentralized settings, but are difficult to solve optimally (NEXP-complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function. This approach makes use of the fact that planning can be accomplished in a centralized offline manner, while execution can still be distributed. This new Dec-POMDP formulation, which we call an occupancy MDP, allows powerful POMDP and continuous-state MDP methods to be used for the first time. When the curse of dimensionality becomes too prohibitive, we refine this basic approach and present ways to combine heuristic search and compact representations that exploit the structure present in multi-agent domains, without losing the ability to eventually converge to an optimal solution. In particular, we introduce feature-based heuristic search that relies on feature-based compact representations, point-based updates, and efficient action selection. A theoretical analysis demonstrates that our feature-based heuristic search algorithms terminate in finite time with an optimal solution. We include an extensive empirical analysis using well-known benchmarks, thereby demonstrating that our approach provides significant scalability improvements compared to the state of the art.
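
    The piecewise-linear and convex (PWLC) structure invoked here is the same one exploited by exact POMDP solvers: the optimal value function is the upper envelope of finitely many linear functions of the occupancy state. Schematically, in reconstructed notation (not quoted from the paper):

    % PWLC value function over occupancy states: a finite set \Gamma_t of
    % linear "alpha vectors" whose upper envelope gives the value, exactly
    % as for belief states in POMDPs.
    V_t(\eta) \;=\; \max_{\alpha \in \Gamma_t} \sum_{s, \theta} \alpha(s, \theta)\, \eta(s, \theta)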

    Optimally Solving Dec-POMDPs as Continuous-State MDPs

    Optimally solving decentralized partially observable Markov decision processes (Dec-POMDPs) is a hard combinatorial problem. Current algorithms search through the space of full histories for each agent. Because of the doubly exponential growth in the number of policies in this space as the planning horizon increases, these methods quickly become intractable. However, in real-world problems, computing policies over the full history space is often unnecessary. True histories experienced by the agents often lie near a structured, low-dimensional manifold embedded in the history space. We show that by transforming a Dec-POMDP into a continuous-state MDP, we are able to find and exploit these low-dimensional representations. Using this novel transformation, we can then apply powerful techniques for solving POMDPs and continuous-state MDPs. By combining a general search algorithm and dimension reduction based on feature selection, we introduce a novel approach to optimally solve problems with significantly longer planning horizons than previous methods.
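
    The dimension-reduction step can be stated compactly: instead of indexing decision rules by full histories, one chooses a feature map and treats histories with the same feature value interchangeably. Schematically, in our own notation (an illustration, not the paper's definitions):

    % Feature-based compression of histories: histories that agree under the
    % feature map \varphi receive the same decision, so search runs over the
    % much smaller feature space \varphi(\Theta_t) instead of \Theta_t.
    \varphi(\theta) = \varphi(\theta') \;\Longrightarrow\; d_t(\theta) = d_t(\theta')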

    Contributions to Solving Centralized and Decentralized Markov Decision Processes: Algorithms and Theory

    This thesis addresses the computational issues in sequential decision-making under various sources of uncertainty for either single- or multi-agent systems. In particular, we are concerned with problems that can be modeled as Markov decision processes. In addition, we address extensions of the Markov decision process that involve uncertainty resulting from other agents in the world, and partial observability. We present several strategies that improve the performance of state-of-the-art techniques for solving Markov decision processes. We suggest a general method, namely topological dynamic programming, that exploits the causal relation between states to deal with two key issues. First, it detects the structure of the problem as a means of overcoming both the dimensionality and history curses. Secondly, it circumvents the problem of unnecessary backups and builds optimal and approximate solutions based on a topological order induced by the underlying problem structure. Moreover, we introduce centralized planning for distributed control of Markov decision processes and provide a theoretical basis for further work in that direction. This analysis results in a number of efficient exact as well as approximate algorithms for solving decentralized Markov decision processes. Among many, the point-based incremental pruning algorithm turns out to be the most efficient so far. Centralized planning for distributed control opens a number of avenues, including a bounded sufficient statistic for a general model of decentralized Markov decision processes.
    This thesis deals with sequential decision-making problems under uncertainty in single- or multi-agent systems. Markov decision processes offer a mathematical model for both formalizing and solving such problems. Many works propose efficient techniques for solving Markov decision processes. Nevertheless, three factors, known as curses, severely limit the scalability of these techniques. The first factor, the curse of dimensionality, is the best known: it ties the complexity of algorithms to the number of states of the system, which grows exponentially with the number of state attributes. The second factor, the curse of history, was identified more recently: it ties the complexity of algorithms to the exponential size of the space of histories that must be taken into account in order to solve the problem. Finally, the last factor, the curse of distributivity, is identified in this thesis: it ties the complexity of algorithms to the constraint of distributed control of a system, resulting in a doubly exponential complexity. Through our contributions, we propose an answer to each of the three curses. We mitigate both the curse of dimensionality and the curse of history by exploiting the causal dependencies either between states or between histories. Following this idea, we propose a family of exact and approximate algorithms, named topological dynamic programming, for solving completely or partially observable Markov decision processes. These algorithms considerably reduce the number of updates of a state or a history; thus, when problems exhibit a topological structure, topological dynamic programming offers an efficient solution. To counter the effects of the curse of distributivity, we propose extending centralized planning to the setting of distributed control. We provide a formal analysis of distributed-control problems for Markov decision processes from the viewpoint of centralized planning. From this analysis, many centralized planning algorithms have emerged; among them is point-based incremental pruning (PBIP), the most efficient approximate algorithm to date for solving decentralized Markov decision processes.
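
    The topological dynamic programming idea lends itself to a compact illustration: order the states by their causal (reachability) structure, then back up values in reverse topological order, so that every state is updated after its successors and no backup is wasted. Below is a minimal Python sketch for an acyclic MDP; the data structures and names are hypothetical, not the thesis implementation.

    # Minimal sketch (illustrative, not the thesis code): one sweep of
    # Bellman backups along a reverse topological order of an acyclic MDP,
    # so each state is backed up after all of its successors.
    from graphlib import TopologicalSorter

    def topological_backup(states, actions, P, R, gamma=0.95):
        """P[s][a] -> list of (next_state, prob); R[s][a] -> reward."""
        # State s "depends on" its successors: they must be valued first.
        deps = {s: {s2 for a in actions for s2, _ in P[s][a] if s2 != s}
                for s in states}
        V = {s: 0.0 for s in states}
        # static_order() yields dependencies first, i.e. successors before s.
        for s in TopologicalSorter(deps).static_order():
            V[s] = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                       for a in actions)
        return V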